Is the cute whale fleet on its way~
Let's recap the versions installed so far:
Kubernetes: v1.25.1
Docker: 20.10.18
cri-dockerd: v0.2.5
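If you want to double-check what's installed on each host before starting, something like this works (a minimal sketch; whether your cri-dockerd build accepts --version may vary):
kubeadm version -o short      # expect v1.25.1
docker --version              # expect 20.10.18
cri-dockerd --version         # expect 0.2.5 (flag support depends on the build)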
The rest of this post is split into two parts, the Control Plane and the Worker Nodes, with a quick rundown of the steps for each.
So let's start linking up the cute whales across the four seas!
First, pick one machine to serve as the cluster's boss whale (the Control Plane):
sudo kubeadm init \
--apiserver-advertise-address=10.1.9.18 \
--pod-network-cidr=192.168.0.0/16 \
--cri-socket /var/run/cri-dockerd.sock \
--service-cidr=192.168.192.0/18
--apiserver-advertise-address: the host's own IP; it has to be on the same subnet as the other nodes
--pod-network-cidr: the address range used inside the cluster; it must not clash with the host network
--cri-socket: which Container Runtime Interface (CRI) to use; since the previous post installed cri-dockerd there is more than one CRI on the host, so you have to specify this yourself or kubeadm fails because it detects multiple CRIs
--service-cidr: the address range carved out for Services
The 9.18 in 10.1.9.18 is today's date, not your IP address, so remember to change it!
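Not sure which address to put in --apiserver-advertise-address? A quick way to list the host's addresses (just a sanity-check sketch; interface names like eth0 will differ per machine):
hostname -I                   # all IPv4 addresses assigned to this host
ip -4 addr show               # pick the address on the same subnet as your other nodes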
Wait for the init to finish... output:
W0916 20:30:08.119005 8333 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.25.1
...
...
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node3] and IPs [10.1.0.1 10.1.9.18]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node3] and IPs [10.1.9.18 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node3] and IPs [10.1.9.18 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
...
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.1.9.18:6443 --token r1dnp... \
--discovery-token-ca-cert-hash sha256:25c414e47...
To let a non-root user manage the cluster, copy root's credentials (the admin kubeconfig) into that user's own home directory:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
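After copying the config, it's worth a quick check that kubectl can actually talk to the cluster (just a sanity check; the node will still show NotReady until a pod network is deployed):
kubectl cluster-info          # should print the control plane endpoint at 10.1.9.18:6443
kubectl get nodes             # the control-plane node shows up, NotReady for now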
The last part of the output shows the cluster CA hash and the token; nodes will need them later to join the cluster:
--token r1dnp... \
--discovery-token-ca-cert-hash sha256:25c414e47...
Why 192.168.0.0/16? Day 1 described the network I'm on: hosts on campus can be treated as the 10.0.0.0/8 range, and the machine room uses 168.254.0.0/16 internally. Neither of those can be reused, or traffic would end up going around in circles inside the cluster, so in the end I picked the class C private range 192.168.0.0/16.
You'll also need to keep this in mind later when configuring whichever CNI plugin you choose (see the short sketch after the list below)!
Using a non-private range works too, but then the cluster may never be able to reach external hosts in that same range again.
If your restricted network looks different from mine, feel free to pick something else, e.g. 172.16.0.0/16 or 10.10.0.0/16. For reference, the private IP ranges are:
10.0.0.0/8 (10.0.0.0 – 10.255.255.255)
172.16.0.0/12 (172.16.0.0 – 172.31.255.255)
192.168.0.0/16 (192.168.0.0 – 192.168.255.255)
Source: Private network - Wikipedia, the free encyclopedia
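As a concrete illustration of the CNI note above, here's a hedged sketch assuming Calico as the pod network (the manifest URL and version are my assumption, not something this series has committed to): whichever plugin you choose, make sure its pod address pool matches the --pod-network-cidr you gave kubeadm init.
curl -sO https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml   # should correspond to 192.168.0.0/16; adjust if it doesn't
kubectl apply -f calico.yaml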
Time to bring every machine into the whale fleet~
Take the kubeadm join command generated earlier when the Control Plane finished its init and run it on each worker, and remember to add --cri-socket!
sudo kubeadm join 10.1.9.18:6443 --token r1dnp... \
--discovery-token-ca-cert-hash sha256:25c414e47... \
--cri-socket unix:///var/run/cri-dockerd.sock
output...
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
If you forget the CA hash, go back to the Control Plane and pull it out with openssl:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
If you've forgotten the token as well, there's a command for that too~
sudo kubeadm token list
output...
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
r1dnp... 4h 2022-09-17T20:31:35Z authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
If the token has already expired, just generate a new one; this command is even thoughtful enough to print the whole join command along with it~
sudo kubeadm token create --print-join-command
output... and another reminder to append --cri-socket~
kubeadm join 10.1.9.18:6443 --token xqin... --discovery-token-ca-cert-hash sha256:25c414e...
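So what you actually run on the worker is the printed command plus the CRI socket, along these lines (token and hash elided just like above):
sudo kubeadm join 10.1.9.18:6443 --token xqin... \
    --discovery-token-ca-cert-hash sha256:25c414e... \
    --cri-socket unix:///var/run/cri-dockerd.sock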
Once those steps are done, the whale fleet is complete~~
Head back to the Control Plane and take a look at the fleet~
kubectl get nodes
output...
NAME STATUS ROLES AGE VERSION
whale1 NotReady control-plane 20m v1.25.1
whale2 NotReady <none> 15m v1.25.1
whale3 NotReady <none> 15m v1.25.1
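All three nodes show NotReady because no pod network (CNI plugin) has been deployed yet, exactly as the init output warned. If you want to see that for yourself (whale1 is just the node name from the output above):
kubectl describe node whale1   # the Ready condition reports that the CNI / network plugin is not ready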
Tomorrow, a quick introduction to how the fleet is organized~ (Control Plane & Worker Node)
Honestly it's also a chance to sneak out a stockpiled post, take a breather, and clean up some bugs...
And hunt down a few more whale pictures while I'm at it~